psycho[w83,jmc] Notes on article for Psychology Today
Illustrations:
1. Rube Goldberg thermostat
1'. I'm just a computer, but ...
2. jmc in Tom Wolfe costume in front of ordinary thermostat.
The main idea of the article is to elaborate the notion of
thermostat to a level where it is hard to describe what is known
about it without using mental qualities.
Anthropomorphism is somewhat ok.
cory.1[let,jmc]
keep reprint rights
Chris Cory 212 725-7535
illustrations
Fragmentary mental qualities
A dog looks for an object where it usually is rather than where the dog left it.
Reference to "Can computers feel pain?"
Searle?
Apes and mirrors
Natural kinds
"Place the control near the bed in a place that is neither hotter nor
colder than the room itself. If the control is placed on a radiator or
radiant heated floors, it will "think" the entire room is hot and will
lower your blanket temperature, making your bed too cold. If the control
is placed on the window sill in a cold draft, it will "think" the entire
room is cold and will heat up your bed so it will be too hot." - from the
instructions to an electric blanket.
Note that the blanket control, in the instructions quoted later,
doesn't require a numerical concept of temperature.
Jumping to conclusions
It is better to oversimplify than to give up.
Sarah McCarthy - "I can, but I won't".
How much of a mind do present machines and programs have?
What do they know?
We have some complicated inductions to do: concluding that objects
are permanent, that other people are like ourselves. Perhaps we do
them, but perhaps the results are pre-programmed and have only to
mature.
Asimov
The key to the paper is the set of examples.
1. thermostat
2. fancy thermostat
3. operating system
	It won't run my job because (it thinks) I don't want it to run,
my core requirement is too large, I have recently run, or my
account is out of money.
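	A minimal sketch of such an operating system, with invented job
fields and limits; each refusal reason corresponds to a sentence of
the form "it thinks ...":

CORE_LIMIT = 64            # invented maximum core a job may request
RECENT_RUN_SECONDS = 300   # invented waiting period between runs

def why_refused(job):
    """Return why the job won't run, or None if it may run."""
    if job["core_requested"] > CORE_LIMIT:
        return "it thinks my core requirement is too large"
    if job["seconds_since_last_run"] < RECENT_RUN_SECONDS:
        return "it thinks I have recently run"
    if job["account_balance"] <= 0:
        return "it thinks my account is out of money"
    return None

job = {"core_requested": 80, "seconds_since_last_run": 900,
       "account_balance": 12}
print(why_refused(job))  # it thinks my core requirement is too large

	The ascription is informative precisely because the user usually
can't see this code and must infer the "beliefs" from the refusals.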
We could make them self-conscious, but we haven't.
What kind of computer program should consider itself
on a par with humans in some respect? If it could observe humans
sufficiently, it could learn that certain information could be
obtained in certain places. Perhaps I want my advice-giving
program to observe everything I type and learn what it can do
for itself.
We are at the beginning of this, so the next steps are speculative -
even more speculative than the ultimate result.
However, we say the airline schedule is in the guide rather than
saying that the guide knows the schedule.
The business is in the phone book even if the book is in Chinese.
Reply to remarks about certain truth being socially determined.
As Laplace said to Napoleon in another connection, "I had no
need of that hypothesis".
What would be required for a computer program to have rights? To
have obligations?
We don't want Asimov's third law.
what makes us human
semi-anthropomorphic
Sherry Turkle paper in Society "blaming computers"
4000 words
send ascribing
$1000
relate to current issues
Relate to the design stance and Brainstorms.
Since everyone gets to put in plugs, I'll put in one for defense.
1. criteria for welfare or guilt of the computer.
2. A computer isn't hungry for electricity.
A person might or might not be hungry for some essential food.
The annoyance of a carpenter when he sees a laborer putting
up forms for concrete.
These ideas come from AI.
One answer about how to talk to the servants is to program them,
but that is inadequate, because we don't understand their programs.
1. It expects a line number.
2. It forgot me.
3. It doesn't know me.
4. Aunt Hillary and the Chinese room.
5. There are large differences between the minds of humans and those
of the most intelligent programs we can make today.
No present program can recognize that it is using an ambiguous
concept and then recover.
Example:
A program running an automatic travel agency.
1. It thinks I want the lowest possible fare.
2. It thinks I'm female, because my name is Evelyn.
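	A sketch of how such built-in defaults give rise to "it thinks"
complaints; all names and tables here are invented:

# The program's built-in assumptions are exactly what a user
# reports as "it thinks".
FEMALE_NAMES = {"evelyn", "susan", "mary"}   # crude, invented table

def make_profile(name):
    return {
        "wants": "lowest possible fare",  # assumed of every customer
        "sex": "F" if name.lower() in FEMALE_NAMES else "M",
    }

profile = make_profile("Evelyn")
# A customer named Evelyn can now truly say:
# "It thinks I want the lowest possible fare" and
# "It thinks I'm female, because my name is Evelyn."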
Susie's ideas
I never thought he'd ask.
I thought he'd never ask.
All I wanted was an X, and it thinks I'm crazy.
We ascribe mental qualities in the course of designing a program.
	For almost a century we have used machines in our daily lives
whose detailed functioning most of us don't understand. Few know
much about how the electric light system or the telephone system
works internally. We do know their external behavior; we know that
lights are turned on and off by switches and how to dial telephone
numbers. We may not know much about internal combustion engines,
but we know that an automobile must have more gasoline put in its
tank when the gauge reads near EMPTY.
	In the next century we will increasingly be faced with much more
complex computer-based systems. It won't be necessary for many
people to know very much about how they work internally, but what
we will have to know about them in order to use them is more
complex than what we need to know about electric lights and
telephones.
	Much that we will have to know concerns the information contained
in them. Many people already use psychological words such as
"knows", "believes", "thinks", "wants" and "likes" in referring to
computer-based machines, even though these machines are quite
different from humans, and these words arose from the human need
to talk about other humans.
According to many authorities, to use the language of mind to
talk about machines is to commit the intellectual sin of anthropomorphism.
Anthropomorphism is a sin all right, but it is going to be increasingly
difficult to understand machines without using mental terms.
	Researchers in artificial intelligence are interested in the use
of mental terms to describe machines for two reasons. First, we
have to provide the machines with theories of knowledge and belief
so they can reason about what their users know, don't know, and
want. Second, what a user knows about a machine can often best be
expressed using mental terms.
	Suppose someone says, "The dog wants to go out". Because the dog
doesn't always want to go out, the statement is informative.
Moreover, the statement is non-committal about what the dog is
doing or will do to further this desire. While most dogs aren't
very subtle, in principle the clues that cause us to believe it
wants to go out could be quite subtle, and a person might not be
able to say how he knows.
	So when is it useful to say that a computer program wants
something? First, it must give information. It must tell us
something about how this particular occasion is different from
other occasions or how this program is different from other
programs. Second, it is most useful when we can't say what the
program will do to realize its want - when it has several
alternative actions and perhaps others we don't know about. Third,
we might not know why we believe it wants it. Finally, instead of
talking about a particular situation, we may be making a general
remark: "When the program wants more paper in the printer, it
beeps and displays a message".
Someone may complain that saying the computer wants more
paper is simply a longwinded way of saying that the printer is
out of paper. However, this may not accurately describe the condition
under which the beep occurs. First, the computer may ask for more
paper under more complicated conditions that we don't precisely
know. For example, when it expects a long print job or expects the
competent operator to go off duty, it may ask for more paper earlier than usual.
Secondly, it may mistakenly believe it is almost out of paper.
For these reasons, it is often more informative, and there is less
risk of error, to use the phrase "wants more paper".
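	A minimal sketch of such a monitor, with invented thresholds and
sensor fields, showing conditions more complicated than simply
being out of paper:

LOW = 50  # sheets; invented threshold

def wants_more_paper(sheets_sensed, queued_pages, operator_leaving):
    if sheets_sensed == 0:
        return True   # out of paper
    if sheets_sensed < LOW and queued_pages > sheets_sensed:
        return True   # expects a long print job
    if operator_leaving and sheets_sensed < 2 * LOW:
        return True   # asks earlier than usual
    return False

# A dirty sensor can report 0 sheets from a full tray; the machine
# then mistakenly believes it is out of paper, and "wants more
# paper" describes it better than "is out of paper" does.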
We are interested in studying the relations between the
states of certain machines and certain English sentences.
The simplest example is the relation between the state of a
thermostat and the English sentence "The room is too cold".
We don't need the word "thinks" in order to understand
how a thermostat works.
"Place the control near the bed in a place that is neither hotter nor
colder than the room itself. If the control is placed on a radiator or
radiant heated floors, it will "think" the entire room is hot and will
lower your blanket temperature, making your bed too cold. If the control
is placed on the window sill in a cold draft, it will "think" the entire
room is cold and will heat up your bed so it will be too hot." - from the
instructions to an electric blanket.
I suppose most philosophers, psychologists and English teachers
would maintain that the electric blanket manufacturer is guilty of
anthropomorphism in the above instructions, and some will claim that great
harm will come from thus ascribing to machines properties which only
humans can have. I shall argue that saying that the blanket control will
"think" is ok; they could even have left off the quotes. Moreover, our
daily lives will more and more involve interacting with much more
sophisticated computer controlled machines. Understanding and explaining
their behavior well enough to make good use of them will more and more
require ascribing mental qualities to them.
Don't get me wrong. The ordinary anthropomorphism, in which
a person says, "This terminal hates me" and bashes it, is just
as silly as ever. I will argue that there can be machines that
might "hate" you, but they would have to be a lot more complex
than an ordinary computer terminal. The question isn't whether
machines have mental qualities. The real question is this: under
what conditions is it useful to ascribe which mental qualities to
which machines?
Answer: When it says something informative that cannot as conveniently
be said some other way.
The thermostat is a nice example, because we can describe
its behavior both ways - using mentalistic words (as above) or purely
in physical terms.
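	A sketch of the two descriptions side by side: the code below is
the physical description, and the comments carry the mentalistic
one. Note that the device needs no numerical concept of
temperature, only an open or closed contact:

def thermostat(contact_closed):
    # contact_closed: True when the bimetallic strip has bent far
    # enough to touch, i.e. the room is below the set point.
    if contact_closed:
        return "furnace on"    # it believes the room is too cold
    return "furnace off"       # it believes the room is warm enough

	Both descriptions fit the same behavior; the mentalistic one earns
its keep only when the physical one is unavailable or unwieldy.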
The machine promises to do something. It owes me $5.00 and an
explanation.
Why computers have a bad name: lack of on-lineness. They're
unconscious most of the time.
Identifying the computer with the program is often a mistake. Searliness.
How to understand the servants
This article is about how to understand your servants and
how to speak to them.
In the next twenty years people will get more and more services
through direct interaction with computers.
We can regard computers as mechanical servants just as we regard
electric lights. The power system that gets us electric
light is very complicated with its nuclear power stations,
its transmission system with power sharing among utilities,
and its complicated financial and legal institutions. However,
to use it, we need only know how to work a switch and change a
light bulb. Some uses of a computer are equally simple, but
to get the most use out of a computer, we have to understand
a lot more about it than how to work an on-off switch.
Fortunately, we will be able to use many intuitive psychological
concepts we have developed for understanding our fellow humans,
provided we are careful to use those concepts that are appropriate
and avoid those that may be tempting but misleading. We can often
best express what
we know about a machine, especially a computer controlled machine,
by saying that it ⊗believes something, that it ⊗wants something,
that it has ⊗promised something or that it ⊗intends to do something.
A machine may sometimes appropriately tell us, %2"I can, but I won't"%1.
The ideas we will use come from artificial intelligence research.
Many people will object to thus ascribing mental qualities to
machines, and their objections have a substantial basis in past
experience. Firstly, it is a common joke to ascribe emotions
to machines. Murphy's laws are examples. Also %2"My car
hates me"%1.
Outline
good and bad anthropomorphism
how do we come to ascribe mental qualities to other people and
to ourselves?
examples of mental qualities and the criteria for their ascription
examples of systems that presently exhibit mental qualities
example of a future system with more mental qualities
conclusion
Life will be more interesting with computers to psychologize.
They will have the psychology that is convenient to their designers.
They'll be fascist bastards if the programmers don't think twice.
References:
	McCarthy, "Ascribing Mental Qualities to Machines"
	Dennett, Brainstorms
	Margaret Boden, Artificial Intelligence and Natural Man
	Nilsson, Principles of Artificial Intelligence, and Webber and
Nilsson (eds.), Readings in Artificial Intelligence
People will say a computer promised without first reading Searle on
speech acts.
The little thoughts of thinking machines
Does my computer love me?
No, it doesn't.
Can you make one that will love me?
Well, maybe. It might be difficult, though.
Self-conscious.
Should we use words like ⊗believes, ⊗wants, ⊗intends, ⊗promises,
and ⊗owes in explaining computer programs?
Computer programs are already very complicated and they will get
more so.
Who can find me a really complicated temperature control mechanism?
Well, I can make it love you, but then it will be jealous when
you use other computers.
JOE'S PROGRAMMING SHOP
ACME PROGRAMMING
Certainly I can make the program love you.
Do you want it to be jealous when you use other programs
in the same computer?
There is only one God and he is John McCarthy and
his prophet begins at location 37254.